Gemma/[Gemma_1]Advanced_Prompting_Techniques.ipynb
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "Tce3stUlHN0L"
},
"source": [
"##### Copyright 2024 Google LLC."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"cellView": "form",
"id": "tuOe1ymfHZPu"
},
"outputs": [],
"source": [
"# @title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "dfsDR_omdNea"
},
"source": [
"# Gemma - Advanced prompting techniques\n",
"This notebook explores advanced prompt engineering techniques. These techniques can guide large language models towards more accurate and robust outputs by incorporating specific phrasing within the prompts themselves.\n",
"\n",
"<table align=\"left\">\n",
" <td>\n",
" <a target=\"_blank\" href=\"https://colab.research.google.com/github/google-gemini/gemma-cookbook/blob/main/Gemma/[Gemma_1]Advanced_Prompting_Techniques.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n",
" </td>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "FaqZItBdeokU"
},
"source": [
"## Setup\n",
"\n",
"### Select the Colab runtime\n",
"To complete this tutorial, you'll need to have a Colab runtime with sufficient resources to run the Gemma model. In this case, you can use a T4 GPU:\n",
"\n",
"1. In the upper-right of the Colab window, select **▾ (Additional connection options)**.\n",
"2. Select **Change runtime type**.\n",
"3. Under **Hardware accelerator**, select **T4 GPU**.\n",
"\n",
"### Gemma setup\n",
"\n",
"To complete this tutorial, you'll first need to complete the setup instructions at [Gemma setup](https://ai.google.dev/gemma/docs/setup). The Gemma setup instructions show you how to do the following:\n",
"\n",
"* Get access to Gemma on kaggle.com.\n",
"* Select a Colab runtime with sufficient resources to run\n",
" the Gemma 2B model.\n",
"* Generate and configure a Kaggle username and an API key as Colab secrets.\n",
"\n",
"After you've completed the Gemma setup, move on to the next section, where you'll set environment variables for your Colab environment.\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "CY2kGtsyYpHF"
},
"source": [
"### Configure your credentials\n",
"\n",
"Add your Kaggle credentials to the Colab Secrets manager to securely store it.\n",
"\n",
"1. Open your Google Colab notebook and click on the 🔑 Secrets tab in the left panel. <img src=\"https://storage.googleapis.com/generativeai-downloads/images/secrets.jpg\" alt=\"The Secrets tab is found on the left panel.\" width=50%>\n",
"2. Create new secrets: `KAGGLE_USERNAME` and `KAGGLE_KEY`\n",
"3. Copy/paste your username into `KAGGLE_USERNAME`\n",
"3. Copy/paste your key into `KAGGLE_KEY`\n",
"4. Toggle the buttons on the left to allow notebook access to the secrets.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "A9sUQ4WrP-Yr"
},
"outputs": [],
"source": [
"import os\n",
"from google.colab import userdata\n",
"\n",
"# Note: `userdata.get` is a Colab API. If you're not using Colab, set the env\n",
"# vars as appropriate for your system.\n",
"os.environ[\"KAGGLE_USERNAME\"] = userdata.get(\"KAGGLE_USERNAME\")\n",
"os.environ[\"KAGGLE_KEY\"] = userdata.get(\"KAGGLE_KEY\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "iwjo5_Uucxkw"
},
"source": [
"### Install dependencies\n",
"Run the cell below to install all the required dependencies."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "r_nXPEsF7UWQ"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.1/1.1 MB\u001b[0m \u001b[31m6.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m570.5/570.5 kB\u001b[0m \u001b[31m9.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m311.2/311.2 kB\u001b[0m \u001b[31m4.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m950.8/950.8 kB\u001b[0m \u001b[31m16.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m5.2/5.2 MB\u001b[0m \u001b[31m35.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m589.8/589.8 MB\u001b[0m \u001b[31m2.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m5.3/5.3 MB\u001b[0m \u001b[31m99.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m2.2/2.2 MB\u001b[0m \u001b[31m89.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m5.5/5.5 MB\u001b[0m \u001b[31m107.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
"\u001b[?25h\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n",
"tf-keras 2.15.1 requires tensorflow<2.16,>=2.15, but you have tensorflow 2.16.1 which is incompatible.\u001b[0m\u001b[31m\n",
"\u001b[0m"
]
}
],
"source": [
"!pip install -q -U keras keras-nlp"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "J3sX2mFH4GWk"
},
"source": [
"## Gemma"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Fz47tAgSKMNH"
},
"source": [
"**About Gemma**\n",
"\n",
"Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights, pre-trained variants, and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone.\n",
"\n",
"**Prompt formatting**\n",
"\n",
"Instruction-tuned (IT) models are trained with a specific formatter that annotates all instruction tuning examples with extra information, both at training and inference time. The formatter has two purposes:\n",
"\n",
"* Indicating roles in a conversation, such as the system, user, or assistant roles.\n",
"* Delineating turns in a conversation, especially in a multi-turn conversation.\n",
"\n",
"Below, we specify the control tokens used by Gemma and their use cases. Note that the control tokens are reserved in and specific to our tokenizer.\n",
"\n",
"* Token to indicate a user turn: `user`\n",
"* Token to indicate a model turn: `model`\n",
"* Token to indicate the beginning of dialogue turn: `<start_of_turn>`\n",
"* Token to indicate the end of dialogue turn: `<end_of_turn>`\n",
"\n",
"Here's the [official documentation](https://ai.google.dev/gemma/docs/formatting) regarding prompting instruction-tuned models."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "B3WckZv2hef3"
},
"outputs": [],
"source": [
"import keras\n",
"import keras_nlp\n",
"from IPython.display import display, Markdown, Latex\n",
"\n",
"os.environ[\"KAGGLE_USERNAME\"] = userdata.get(\"KAGGLE_USERNAME\")\n",
"os.environ[\"KAGGLE_KEY\"] = userdata.get(\"KAGGLE_KEY\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "8dfseDZChhjl"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Attaching 'metadata.json' from model 'keras/gemma/keras/gemma_1.1_instruct_2b_en/3' to your Colab notebook...\n",
"Attaching 'metadata.json' from model 'keras/gemma/keras/gemma_1.1_instruct_2b_en/3' to your Colab notebook...\n",
"Attaching 'task.json' from model 'keras/gemma/keras/gemma_1.1_instruct_2b_en/3' to your Colab notebook...\n",
"Attaching 'config.json' from model 'keras/gemma/keras/gemma_1.1_instruct_2b_en/3' to your Colab notebook...\n",
"Attaching 'metadata.json' from model 'keras/gemma/keras/gemma_1.1_instruct_2b_en/3' to your Colab notebook...\n",
"Attaching 'metadata.json' from model 'keras/gemma/keras/gemma_1.1_instruct_2b_en/3' to your Colab notebook...\n",
"Attaching 'config.json' from model 'keras/gemma/keras/gemma_1.1_instruct_2b_en/3' to your Colab notebook...\n",
"Attaching 'config.json' from model 'keras/gemma/keras/gemma_1.1_instruct_2b_en/3' to your Colab notebook...\n",
"Attaching 'model.weights.h5' from model 'keras/gemma/keras/gemma_1.1_instruct_2b_en/3' to your Colab notebook...\n",
"Attaching 'metadata.json' from model 'keras/gemma/keras/gemma_1.1_instruct_2b_en/3' to your Colab notebook...\n",
"Attaching 'metadata.json' from model 'keras/gemma/keras/gemma_1.1_instruct_2b_en/3' to your Colab notebook...\n",
"Attaching 'preprocessor.json' from model 'keras/gemma/keras/gemma_1.1_instruct_2b_en/3' to your Colab notebook...\n",
"Attaching 'tokenizer.json' from model 'keras/gemma/keras/gemma_1.1_instruct_2b_en/3' to your Colab notebook...\n",
"Attaching 'tokenizer.json' from model 'keras/gemma/keras/gemma_1.1_instruct_2b_en/3' to your Colab notebook...\n",
"Attaching 'assets/tokenizer/vocabulary.spm' from model 'keras/gemma/keras/gemma_1.1_instruct_2b_en/3' to your Colab notebook...\n"
]
}
],
"source": [
"# Let's load Gemma using Keras\n",
"gemma_model_id = \"gemma_1.1_instruct_2b_en\"\n",
"gemma = keras_nlp.models.GemmaCausalLM.from_preset(gemma_model_id)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "9YCKxIm30hVe"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<start_of_turn>user\n",
"Who are you?<end_of_turn>\n",
"<start_of_turn>model\n",
"\n"
]
}
],
"source": [
"from typing import List\n",
"from IPython.display import display, Markdown, Latex\n",
"\n",
"\n",
"# Helpers\n",
"def convert_message_to_prompt(message: str, model_prefix: str = \"\") -> str:\n",
" \"\"\"Converts a message to a prompt for a large language model.\n",
"\n",
" Args:\n",
" message: The message to convert (str).\n",
" model_prefix: An optional prefix to prepend to the model response (str).\n",
"\n",
" Returns:\n",
" A string containing the prompt for the large language model (str).\n",
" \"\"\"\n",
"\n",
" return (\n",
" f\"<start_of_turn>user\\n{message}<end_of_turn>\\n\"\n",
" f\"<start_of_turn>model\\n{model_prefix}\"\n",
" )\n",
"\n",
"\n",
"# Example output\n",
"print(convert_message_to_prompt(\"Who are you?\"))"
]
},
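{
"cell_type": "markdown",
"metadata": {},
"source": [
"The helper above formats a single user turn. Because the same control tokens also delineate turns in multi-turn conversations, the next cell sketches a multi-turn formatter built on those tokens. The name `convert_conversation_to_prompt` is introduced here purely for illustration and is not part of any library."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A minimal sketch (an assumption, not an official helper): building a\n",
"# multi-turn prompt from (role, message) pairs with the same control tokens.\n",
"from typing import List, Tuple\n",
"\n",
"\n",
"def convert_conversation_to_prompt(turns: List[Tuple[str, str]]) -> str:\n",
"    \"\"\"Formats a multi-turn conversation for Gemma.\n",
"\n",
"    Args:\n",
"      turns: A list of (role, message) pairs, where role is \"user\" or \"model\".\n",
"\n",
"    Returns:\n",
"      A prompt string ending with an open model turn for generation.\n",
"    \"\"\"\n",
"    prompt = \"\"\n",
"    for role, message in turns:\n",
"        prompt += f\"<start_of_turn>{role}\\n{message}<end_of_turn>\\n\"\n",
"    # Leave the final model turn open so the model continues from here.\n",
"    return prompt + \"<start_of_turn>model\\n\"\n",
"\n",
"\n",
"# Example output\n",
"print(\n",
"    convert_conversation_to_prompt(\n",
"        [\n",
"            (\"user\", \"Hi! Who are you?\"),\n",
"            (\"model\", \"I am Gemma, an open model from Google.\"),\n",
"            (\"user\", \"What can you help me with?\"),\n",
"        ]\n",
"    )\n",
")"
]
},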
{
"cell_type": "markdown",
"metadata": {
"id": "pOAEiJmnBE0D"
},
"source": [
"# Advanced prompt techniques"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "WdswWha8dFpH"
},
"source": [
"## Zero-shot prompting\n",
"Think of zero-shot prompting like giving instructions to a super smart machine. Instead of showing it how to do something new, you just tell it clearly what you want (like translating a sentence or understanding emotions). This machine, called a Large Language Model, uses its vast knowledge of language to try its best, even if it hasn't been specifically trained for that exact task."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "6uSdtrdezCwd"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Positive\n"
]
}
],
"source": [
"prompt = \"\"\"Classify the text into neutral, negative, or positive.\n",
"Generate only the class, nothing else.\n",
"Text: I think the food was awesome.\"\"\"\n",
"\n",
"prompt = convert_message_to_prompt(prompt)\n",
"response = gemma.generate(prompt, max_length=64)\n",
"print(response[len(prompt) :])"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "qtICQrhdd7G_"
},
"source": [
"## Few-shot prompting\n",
"Few-shot prompting is like showing your friend a quick example first. You give a Large Language Model a few examples of how to solve a new task (like translating a sentence) and it uses those examples to try its best, even without being fully trained on that task. The model doesn't permanently learn these examples, but it uses them for the current request."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "1uD0Eouhd0wu"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"#MovieLover #Movies #MyMovies #BestMovie #MovieOfTheYear\n"
]
}
],
"source": [
"prompt = \"\"\"Generate a single line of hashtags for the given topic by in the same style as the following examples:\n",
"\n",
"Topic: Books\n",
"#BooksLover #Books #MyBooks #BestBook #BookOfTheYear\n",
"\n",
"Topic: Games\n",
"#GamesLover #Games #MyGames #BestGame #GameOfTheYear\n",
"\n",
"Topic: Movies\n",
"\"\"\"\n",
"\n",
"prompt = convert_message_to_prompt(prompt)\n",
"response = gemma.generate(prompt, max_length=128)\n",
"print(response[len(prompt) :])"
]
},
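{
"cell_type": "markdown",
"metadata": {},
"source": [
"Few-shot prompts like the one above can also be assembled programmatically from a list of examples, which makes it easy to swap examples in and out. The next cell is a small sketch of that idea; `build_few_shot_prompt` and the \"Music\" query are illustrative assumptions, not part of any library."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A minimal sketch: assembling a few-shot prompt from (topic, hashtags) pairs.\n",
"# `build_few_shot_prompt` is a hypothetical helper shown for illustration only.\n",
"def build_few_shot_prompt(instruction, examples, query):\n",
"    \"\"\"Builds a few-shot prompt: instruction, worked examples, then the query.\"\"\"\n",
"    parts = [instruction, \"\"]\n",
"    for topic, hashtags in examples:\n",
"        parts.extend([f\"Topic: {topic}\", hashtags, \"\"])\n",
"    parts.append(f\"Topic: {query}\")\n",
"    return \"\\n\".join(parts)\n",
"\n",
"\n",
"few_shot_prompt = build_few_shot_prompt(\n",
"    \"Generate a single line of hashtags for the given topic in the same style as the following examples:\",\n",
"    [\n",
"        (\"Books\", \"#BooksLover #Books #MyBooks #BestBook #BookOfTheYear\"),\n",
"        (\"Games\", \"#GamesLover #Games #MyGames #BestGame #GameOfTheYear\"),\n",
"    ],\n",
"    \"Music\",\n",
")\n",
"\n",
"prompt = convert_message_to_prompt(few_shot_prompt)\n",
"response = gemma.generate(prompt, max_length=128)\n",
"print(response[len(prompt) :])"
]
},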
{
"cell_type": "markdown",
"metadata": {
"id": "9oR5CabB5Dbu"
},
"source": [
"## Zero-shot Chain-of-Thought\n",
"Technique based on the simple idea of not adding an example to the prompt, but instead the instruction \"think step by step,\" which has proven to be very effective. This forces the model to focus on a single step at a time.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "WPDsRxvrzCsQ"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"You have 15 cupcakes. You gave 5 to your friends, so you have 15 - 5 = 10 cupcakes left. You baked 4 more cupcakes, so you have 10 + 4 = 14 cupcakes right now.\n"
]
}
],
"source": [
"# Just zero-shot prompting\n",
"prompt = \"\"\"I baked 15 cupcakes for a bake sale. I wanted to share some\n",
"with my friends so I gave 5 to my friends. Needing a few more\n",
"for taste testing, I baked 4 more cupcakes.\n",
"Later, I couldn't resist and a I ate 1 cupcake by myself.\n",
"How many cupcakes do I have right now?\"\"\"\n",
"\n",
"prompt = convert_message_to_prompt(prompt)\n",
"response = gemma.generate(prompt, max_length=512)\n",
"print(response[len(prompt) :])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "j3Zb78ZizCqF"
},
"outputs": [
{
"data": {
"text/markdown": [
"**Step 1:** After sharing with friends, you have 10 cupcakes left.\n",
"\n",
"\n",
"**Step 2:** You bake 4 more cupcakes, so you have 14 cupcakes in total.\n",
"\n",
"\n",
"**Step 3:** You eat 1 cupcake, leaving you with 13 cupcakes."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# Zero-shot + Chain-of-Thought\n",
"prompt = \"\"\"I baked 15 cupcakes for a bake sale. I wanted to share some\n",
"with my friends so I gave 5 to my friends. Needing a few more\n",
"for taste testing, I baked 4 more cupcakes.\n",
"Later, I couldn't resist and a I ate 1 cupcake by myself.\n",
"How many cupcakes do I have right now? Explain your thinking step by step\n",
"including the number of cupcakes per step.\"\"\"\n",
"\n",
"prompt = convert_message_to_prompt(prompt)\n",
"response = gemma.generate(prompt, max_length=512)\n",
"display(Markdown(response[len(prompt) :]))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "KaUN_C4Lntah"
},
"source": [
"As you can see the second answer is the correct one."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "FEK_8aK641Kp"
},
"source": [
"## Tree of Thoughts (ToT)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "gf3zr1qOkxCi"
},
"source": [
"Tree-of-Thought (ToT) Prompting is a new method that enhances the existing Chain-of-Thought (CoT) technique, enabling AI models like Gemma to think more effectively and correct their own mistakes over time. By allowing the model to explore multiple lines of reasoning and self-correct as it goes, ToT prompting can improve the accuracy of responses, even for complex questions.\n",
"\n",
"For example, if asked where a ball is after a series of actions, Gemma initially might give the wrong answer. However, using a ToT prompt, where the model considers different perspectives and corrects itself, Gemma can arrive at the right answer.\n",
"\n",
"The CoT technique encourages the model to explain its reasoning step-by-step, but it doesn't always guarantee the right answer. ToT takes this further by having the model simulate a discussion among multiple experts, who each contribute and correct each other, leading to a more accurate final answer.\n",
"\n",
"While this approach shows promise, it's still being tested and refined. The idea is similar to how teams of people can often make better decisions than individuals by discussing and evaluating different viewpoints. This technique could lead to more reliable and intelligent answers.\n",
"\n",
"You can read more about this technique [here](https://www.promptingguide.ai/techniques/tot)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Pe7C68z942Lp"
},
"outputs": [
{
"data": {
"text/markdown": [
"**1kg of feathers is lighter than 1kg of stones.**\n",
"\n",
"This is because feathers are much lighter than stones."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# Zero-shot prompting\n",
"prompt = \"\"\"Which is heavier? 1kg of feathers or 1kg of stones?\"\"\"\n",
"\n",
"prompt = convert_message_to_prompt(prompt)\n",
"response = gemma.generate(prompt, max_length=512)\n",
"display(Markdown(response[len(prompt) :]))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "-anu2KCy0is9"
},
"outputs": [
{
"data": {
"text/markdown": [
"**Round 1:**\n",
"\n",
"**Expert 1:** The feathers are significantly lighter. They are composed primarily of air, while stones are dense and have a substantial amount of mass associated with their size and composition.\n",
"\n",
"**Expert 2:** While feathers are indeed much lighter, it's important to consider the surface area of the objects. Feathers have a large surface area-to-mass ratio, making them relatively easy to lift and move.\n",
"\n",
"**Expert 3:** True, but the surface area of the stones also plays a role. The larger the stone, the greater its surface area, and the heavier it will feel.\n",
"\n",
"**Round 2:**\n",
"\n",
"**Expert 1:** The surface area argument is valid, but it's crucial to note that the weight of an object is determined by its mass and the force of gravity acting on it.\n",
"\n",
"**Expert 2:** While mass is a fundamental property of an object, it's the force of gravity that determines its weight. The mass of 1kg of feathers is the same as the weight of 1kg of stones, regardless of their shape or surface area.\n",
"\n",
"**Expert 3:** You're right. The force of gravity pulling the objects down is the same. However, the feathers' surface area allows them to interact more with the air, creating a resistance that makes them feel lighter.\n",
"\n",
"**Round 3:**\n",
"\n",
"**Expert 1:** The resistance due to surface area is a valid point, but it's important to consider the overall context.\n",
"\n",
"**Expert 2:** The weight is determined by the force of gravity and the mass. In this case, the force of gravity is the same for both objects.\n",
"\n",
"**Expert 3:** True, but the feathers' surface area creates a significant additional force that makes them feel lighter.\n",
"\n",
"**Conclusion:**\n",
"\n",
"After three rounds of debate, all three experts agree that the weight of 1kg of feathers and 1kg of stones is the same. The force of gravity pulling them down is the same, but the feathers' surface area creates a resistance that makes them feel lighter."
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"# Tree of Thoughts (ToT)\n",
"prompt = \"\"\"Simulate a multi turn debate of three experts.\n",
"They need to answer the following question: Which is heavier? 1kg of feathers or 1kg of stones?\n",
"They need to debate in rounds and provide explaination until they reach the same conclusion.\"\"\"\n",
"\n",
"prompt = convert_message_to_prompt(prompt)\n",
"response = gemma.generate(prompt, max_length=512)\n",
"display(Markdown(response[len(prompt) :]))"
]
},
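{
"cell_type": "markdown",
"metadata": {},
"source": [
"The prompt above packs the whole debate into a single generation. The next cell is a simplified sketch of the branch-and-evaluate idea behind ToT: it generates several independent lines of reasoning with separate prompts and then asks the model to compare them and pick a final answer. The personas and prompt wording are illustrative assumptions, and this is not the full tree-search algorithm."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A simplified sketch of the ToT branch-and-evaluate idea (an assumption,\n",
"# not the full tree-search algorithm): generate several independent lines of\n",
"# reasoning, then ask the model to compare them and pick a final answer.\n",
"question = \"Which is heavier? 1kg of feathers or 1kg of stones?\"\n",
"\n",
"branches = []\n",
"for persona in [\"a physicist\", \"a schoolteacher\", \"a skeptical engineer\"]:\n",
"    branch_prompt = convert_message_to_prompt(\n",
"        f\"As {persona}, answer step by step: {question}\"\n",
"    )\n",
"    branch = gemma.generate(branch_prompt, max_length=256)\n",
"    branches.append(branch[len(branch_prompt) :])\n",
"\n",
"judge_prompt = convert_message_to_prompt(\n",
"    f\"Here are three candidate answers to the question '{question}':\\n\\n\"\n",
"    + \"\\n\\n\".join(f\"Answer {i + 1}: {b}\" for i, b in enumerate(branches))\n",
"    + \"\\n\\nCompare them and state the single best final answer.\"\n",
")\n",
"judge_response = gemma.generate(judge_prompt, max_length=1024)\n",
"display(Markdown(judge_response[len(judge_prompt) :]))"
]
},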
{
"cell_type": "markdown",
"metadata": {
"id": "fNjjuPtj-hwY"
},
"source": [
"## Prompt Chaining"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "kmdzwbgIlanL"
},
"source": [
"When working with complex tasks, large language models (LLMs) can benefit from a technique called prompt chaining. This involves breaking the task down into smaller, more manageable subtasks. The LLM then receives prompts for each subtask, with the output from one feeding into the next. This step-by-step approach is not only more effective for LLMs than a single, detailed prompt, but it also offers advantages like greater transparency and easier debugging. This makes prompt chaining especially valuable for building conversational assistants and personalizing user experiences."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "AGfsyHUAtkfy"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"--- After first prompt: ---\n"
]
},
{
"data": {
"text/markdown": [
"- Paris\n",
"- Tokyo\n",
"- Chicago"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"prompt_1 = \"\"\"\n",
"Extract all city names from the following text:\n",
"\n",
"The aroma of fresh bread wafted through the Paris market, tempting Amelia\n",
"as she hurried to catch her flight to Tokyo. She dreamt of indulging in steaming\n",
"ramen after a whirlwind tour of ancient temples. Back home in Chicago,\n",
"she'd recount her adventures, photos filled with Eiffel Tower selfies\n",
"and neon-lit Tokyo nights.\n",
"\"\"\"\n",
"prompt_1 = convert_message_to_prompt(prompt_1)\n",
"prompt_1_response = gemma.generate(prompt_1, max_length=512)\n",
"\n",
"print(\"--- After first prompt: ---\")\n",
"display(Markdown(prompt_1_response[len(prompt_1) :]))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Q937pjao-kem"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"--- After second prompt: ---\n"
]
},
{
"data": {
"text/markdown": [
"```python\n",
"cities = ['PARIS', 'TOKYO', 'CHICAGO']\n",
"```"
],
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"previous_response = {prompt_1_response[len(prompt_1) :]}\n",
"\n",
"prompt_2 = f\"\"\"Convert the following cities into a valid Python list.\n",
"Make it uppercase and remove all unecessary characters:\\n{previous_response}\"\"\"\n",
"prompt_2 = convert_message_to_prompt(prompt_2)\n",
"prompt_2_response = gemma.generate(prompt_2, max_length=512)\n",
"\n",
"print(\"--- After second prompt: ---\")\n",
"display(Markdown(prompt_2_response[len(prompt_2) :]))"
]
},
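{
"cell_type": "markdown",
"metadata": {},
"source": [
"Because each step's output feeds the next, the final output can also feed ordinary program logic. The next cell is a best-effort sketch (an assumption about the response format, which can vary between runs) that extracts the list literal from the second response and parses it with Python's `ast` module."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import ast\n",
"import re\n",
"\n",
"# Best-effort sketch: pull the Python list literal out of the model's markdown\n",
"# response and turn it into a real Python object. The model's formatting can\n",
"# vary between runs, so this extraction may need adjusting.\n",
"raw_output = prompt_2_response[len(prompt_2) :]\n",
"match = re.search(r\"\\[.*\\]\", raw_output, re.DOTALL)\n",
"if match:\n",
"    try:\n",
"        cities = ast.literal_eval(match.group(0))\n",
"        print(cities, type(cities))\n",
"    except (ValueError, SyntaxError) as err:\n",
"        print(\"Could not parse the list literal:\", err)\n",
"else:\n",
"    print(\"Could not find a list literal in the response:\", raw_output)"
]
},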
{
"cell_type": "markdown",
"metadata": {
"id": "p1T72UYj_WYu"
},
"source": [
"## Simple tool calling"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "0MbFMK1oligh"
},
"source": [
"Function calling allows large language models (LLMs) to connect with external tools and APIs. By using the right prompt LLMs can identify when to call a specific function and provide the arguments (e.g. in JSON format). These functions act like tools within the AI application, allowing for data retrieval, external interaction, and ultimately more capable chatbots and agents."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "pInt8bzJZ9hU"
},
"outputs": [],
"source": [
"import random\n",
"import time\n",
"from datetime import datetime\n",
"\n",
"# Let's define some functions that can be used by the LLM\n",
"\n",
"\n",
"def get_weather_data():\n",
" print(\"(py function) fetching the current weather\")\n",
" return random.choice([\"Rainy\", \"Sunny\", \"Snowing\"])\n",
"\n",
"\n",
"def get_today_date():\n",
" print(\"(py function) fetching the current date\")\n",
" return datetime.today().strftime(\"%Y-%m-%d\")\n",
"\n",
"\n",
"def get_time():\n",
" print(\"(py function) fetching the current time\")\n",
" return datetime.today().strftime(\"%H:%M:%S\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "HAzWZrw3Z-KR"
},
"outputs": [],
"source": [
"# Mapping text -> function\n",
"mapping = {\n",
" \"get_weather_data\": get_weather_data,\n",
" \"get_today_date\": get_today_date,\n",
" \"get_time\": get_time,\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "F9gEvfyEzChz"
},
"outputs": [],
"source": [
"def ask_model_with_fn_calling(question: str) -> str:\n",
" prompt = f\"\"\"Your job is to select the best tool for the given query. Only respond with the name of the tool.\n",
" When you get tool output, format the output message to answer user question.\n",
"\n",
" Available tools:\n",
" - get_weather_data: Checks the current weather\n",
" - get_today_date: Retruns today's date\n",
" - get_time: Returns current time in local timezone\n",
"\n",
" Examples:\n",
" User: What's the weather?\n",
" Tool: get_weather_data\n",
"\n",
" User: What's the time?\n",
" Tool: get_time\n",
" Tool output: 21:00\n",
" Answer: The current time is: 21:00\n",
"\n",
" User: Give me today's date\n",
" Tool: get_today_date\n",
" Tool output: 21-10-2024\n",
" Answer: Today's date is 21-10-2024\n",
"\n",
" User: What's the weather?\n",
" Tool: get_weather_data\n",
" Tool output: Rainy\n",
" Answer: It's rainy today.\n",
"\n",
" User: {question}\"\"\"\n",
"\n",
" # Retrieving the tool name\n",
" prompt = convert_message_to_prompt(prompt, \"Tool:\")\n",
" response = gemma.generate(prompt, max_length=512)\n",
"\n",
" # Let's extract the tool name\n",
" command = lines = response.split(\"\\n\")[-1]\n",
" tool_name = command.split(\" \")[-1]\n",
"\n",
" # Calling the fucntion\n",
" try:\n",
" output = mapping[tool_name]()\n",
" except KeyError:\n",
" print(f\"Model decided to use function called '{tool_name}' which doesnt exist.\")\n",
" return\n",
"\n",
" # Generating the answer with all the data\n",
" new_prompt = response + f\"\\nTool output: {output}\\nAnswer: \"\n",
" new_response = gemma.generate(new_prompt, max_length=512)\n",
" print(new_response[len(new_prompt) :])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "yhZZJUT8-ofk"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"(py function) fetching the current weather\n",
"\n",
"It's rainy today.\n"
]
}
],
"source": [
"ask_model_with_fn_calling(\"What's the weather today?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "2NejEvUYZ_jb"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"(py function) fetching the current time\n",
"16:59:52\n"
]
}
],
"source": [
"ask_model_with_fn_calling(\"What's the time?\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "QqvpaKC3aMnO"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"(py function) fetching the current date\n",
"2024-06-16\n"
]
}
],
"source": [
"ask_model_with_fn_calling(\"Answer with today's date\")"
]
}
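,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The tools above take no arguments. When a tool needs parameters, the same pattern can ask the model to emit the call as JSON, as mentioned earlier. The final cell is a minimal sketch of that idea; `get_weather_for_city`, `ask_model_with_json_args`, and the prompt format are illustrative assumptions, not an official API."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"import random\n",
"\n",
"\n",
"# Hypothetical tool that takes an argument.\n",
"def get_weather_for_city(city: str) -> str:\n",
"    print(f\"(py function) fetching the weather for {city}\")\n",
"    return random.choice([\"Rainy\", \"Sunny\", \"Snowing\"])\n",
"\n",
"\n",
"def ask_model_with_json_args(question: str) -> None:\n",
"    # Minimal sketch: ask the model for a JSON tool call, then execute it.\n",
"    prompt = f\"\"\"Select the best tool for the query and respond with a single\n",
"JSON object of the form {{\"tool\": \"<name>\", \"args\": {{...}}}} and nothing else.\n",
"\n",
"Available tools:\n",
"- get_weather_for_city(city): Checks the current weather in a city\n",
"\n",
"Example:\n",
"User: What's the weather in Paris?\n",
"{{\"tool\": \"get_weather_for_city\", \"args\": {{\"city\": \"Paris\"}}}}\n",
"\n",
"User: {question}\"\"\"\n",
"\n",
"    prompt = convert_message_to_prompt(prompt)\n",
"    response = gemma.generate(prompt, max_length=512)\n",
"    try:\n",
"        call = json.loads(response[len(prompt) :].strip())\n",
"        tool = {\"get_weather_for_city\": get_weather_for_city}[call[\"tool\"]]\n",
"        print(tool(**call[\"args\"]))\n",
"    except (json.JSONDecodeError, KeyError, TypeError) as err:\n",
"        print(\"Could not parse or execute the tool call:\", err)\n",
"\n",
"\n",
"ask_model_with_json_args(\"What's the weather in Tokyo?\")"
]
}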
],
"metadata": {
"accelerator": "GPU",
"colab": {
"name": "[Gemma_1]Advanced_Prompting_Techniques.ipynb",
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
}
},
"nbformat": 4,
"nbformat_minor": 0
}